Is Secure and Trusted AI Possible? The EU Leads the Way


The European Union Agency for Cybersecurity (ENISA) releases four reports on the most far-reaching challenges in artificial intelligence (AI) on the occasion of its conference on the supervision of secure and trustworthy AI.

Recent developments in Artificial Intelligence (AI) systems have drawn the world’s attention to the most important aspects of cybersecurity associated with them. The AI Cybersecurity conference aimed to provide a platform for the wider community to share their experiences and to discuss the challenges and opportunities. It promoted cooperation within the AI cybersecurity community in order to reflect upon the proposal for an EU regulation on AI. The proposal could allow the EU to become a pioneer in regulating AI.

Juhan Lepassaar, Executive Director of the EU Agency for Cybersecurity, said: “If we want to both secure AI systems and also ensure privacy, we need to scrutinise how these systems work. ENISA is looking into the technical complexity of AI to best mitigate the cybersecurity risks. We also need to strike the right balance between security and system performance. The conference today will allow us to brainstorm on such challenges and to envisage all possible measures, such as the security-by-design approach. With generative AI developing fast, we are ready to get up to speed to best support policy makers as we enter this new phase of the AI revolution.”

The key questions on the table

The four panels at today’s conference addressed the cybersecurity technical challenges of AI chatbots, the research and innovation needs, and the security considerations for the cybersecurity certification of AI systems. Supporting policy makers, including national authorities, with guidance on best practices and use cases, while at the same time advancing EU-wide rules, gives the EU an opportunity to lead globally in creating secure and trustworthy AI.

Speeches from high-level speakers focused on generative AI, on the legal and policy perspective of the upcoming AI Act and on the measures already taken by the German Cybersecurity Agency, BSI.

About the new AI reports:

  • Setting good cybersecurity practices for AI:

The report stands as a scalable framework to guide national cybersecurity authorities (NCAs) and the AI community in order to secure AI systems, operations and processes. The framework consists of three layers (cybersecurity foundations, AI-specific cybersecurity and sector-specific cybersecurity for AI) and aims to provide a step-by-step approach on following good cybersecurity practices in order to ensure the trustworthiness of AI systems.

  • Cybersecurity and privacy in AI – two use cases: forecasting demand on electricity grids and medical imaging diagnosis

Both reports outline cybersecurity and privacy threats, as well as vulnerabilities that can be exploited, in each use case. The analysis focuses on machine-learning-related threats and vulnerabilities while taking broader AI considerations into account. The emphasis of the work is on privacy issues, which have become one of the most important challenges facing society today. Because security and privacy are intimately related, and both equally important, a balance must be found to meet the requirements of both. The reports reveal, however, that efforts to optimise security and privacy can often come at the expense of system performance.

  • AI and cybersecurity research:

The report identifies five key needs for further research on AI for cybersecurity and on securing AI, intended to inform future EU policy developments and funding initiatives. These needs include, among others, the development of penetration testing tools to identify security vulnerabilities and of standardised frameworks to assess privacy and confidentiality.

Target audience

  • Private or public entities, including the cybersecurity and privacy community: to support risk analysis, the identification of cybersecurity and privacy threats, and the selection of the most appropriate security and privacy controls;
  • The AI technical community, AI cybersecurity and privacy experts, and AI practitioners with an interest in developing secure solutions and in adding security and privacy by design to their solutions.

Further Information

Multilayer framework for good cybersecurity practices for AI – ENISA report 2023

Cybersecurity and privacy in AI – Forecasting demand on electricity grids – ENISA report 2023

Cybersecurity and privacy in AI – Medical imaging diagnosis – ENISA report 2023

Artificial Intelligence and Cybersecurity Research – ENISA report 2023

Cybersecurity of AI and standardisation – ENISA report March 2023

Mind the Gap in Standardisation of Cybersecurity for Artificial Intelligence – ENISA Press Release

Securing Machine Learning Algorithms – ENISA report 2021

ENISA topic – Artificial Intelligence

Proposal for the AI Act

Proposal for the Cyber Resilience Act

Contact

For press questions and interviews, please contact press (at) enisa.europa.eu